
# Multi-GPU Efficient Training

## Whisper Tiny Fleurs
License: Apache-2.0 · Author: Pablex
An audio classification model fine-tuned from openai/whisper-tiny, achieving 87% accuracy on its evaluation set.
Tags: Audio Classification, Transformers
## Qwen2 0.5B Reward
License: Apache-2.0 · Author: trl-lib
A reward model fine-tuned from Qwen/Qwen2-0.5B-Instruct, used to score generated content so that its quality can be evaluated and optimized.
Tags: Large Language Model, Transformers
## Roberta Base Japanese With Auto Jumanpp
Author: nlp-waseda
A pretrained Japanese model based on the RoBERTa architecture with automatic Juman++ tokenization, suited to Japanese natural language processing tasks.
Tags: Large Language Model, Transformers, Japanese